Time series, sequences of observations ordered in time, are central to statistical research and underpin many forecasting applications. Although recent Transformer-based models have shown notable performance, long multi-horizon time series forecasting remains a very challenging task. Going beyond Transformers in sequence translation and transduction research, we observe that down-and-up sampling can nudge temporal saliency patterns to emerge in time sequences. Motivated by this observation, we propose a novel architecture, Temporal Saliency Detection (TSD), built on top of the attention mechanism and applied to multi-horizon time series prediction. We renovate the traditional encoder-decoder architecture by introducing a series of deep convolutional blocks that work in tandem with multi-head self-attention. The proposed TSD approach facilitates multiresolution analysis of saliency patterns over condensed multi-heads, thus progressively enhancing complex time series forecasting. Experimental results show that our approach significantly outperforms existing state-of-the-art methods across multiple standard benchmark datasets in many far-horizon forecasting settings. Overall, TSD achieves 31% and 46% relative improvements over the current state-of-the-art models in multivariate and univariate forecasting scenarios, respectively, on standard benchmarks. The Git repository is available at https://github.com/duongtrung/time-series-temporal-saliency-patterns.
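The core intuition of the abstract, that down-and-up sampling makes salient temporal events stand out, can be illustrated with a minimal sketch. This is a hypothetical toy example (average-pooling down-sampling, nearest-neighbor up-sampling, saliency as the residual), not the authors' actual TSD architecture, which combines deep convolutional blocks with multi-head self-attention.

```python
import numpy as np

def down_up_sample(x, factor=4):
    """Down-sample by average pooling, then up-sample by repetition.

    Illustrative only: a coarse reconstruction that smooths away
    short-lived events while preserving the slow trend.
    """
    n = len(x)
    pad = (-n) % factor
    padded = np.concatenate([x, np.full(pad, x[-1])])
    pooled = padded.reshape(-1, factor).mean(axis=1)   # down-sample
    coarse = np.repeat(pooled, factor)[:n]             # up-sample
    return coarse

def temporal_saliency(x, factor=4):
    """Saliency = deviation of the series from its coarse reconstruction."""
    return np.abs(x - down_up_sample(x, factor))

# A smooth trend with one sharp spike: the spike dominates the saliency map.
t = np.linspace(0, 1, 64)
series = np.sin(2 * np.pi * t)
series[32] += 3.0  # inject a salient event
sal = temporal_saliency(series)
print(int(np.argmax(sal)))  # -> 32, the spike position stands out
```

The smooth sine component survives the down-and-up round trip almost unchanged, so the residual saliency concentrates on the injected spike; in TSD this kind of multiresolution signal is learned jointly with attention rather than computed by a fixed pooling rule.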
It is widely acknowledged that the availability of huge amounts of (training) data is one of the most important factors behind recent advances in artificial intelligence (AI). However, datasets are often designed for specific tasks in narrow AI subareas, and there is no unified way to manage and access them. This not only creates unnecessary overhead when training or deploying machine learning models, but also limits the understanding of the data, which is crucial for data-centric AI. In this paper, we present our vision of a unified framework for different datasets so that they can be easily integrated and queried, e.g., using a standard query language. We demonstrate this in ongoing work on a framework for datasets in computer vision, and show its advantages in different scenarios. Our demo is available at https://vision.semkg.org.